Conversation

@ufechner7 (Member) commented Sep 22, 2025

Add bench_simplify.jl

@ufechner7 ufechner7 marked this pull request as draft September 22, 2025 14:20
codecov-commenter commented Sep 22, 2025

Codecov Report

❌ Patch coverage is 55.00000% with 9 lines in your changes missing coverage. Please review.

Files with missing lines    Patch %   Lines
src/symbolic_awe_model.jl   46.66%    8 Missing ⚠️
src/mtk_model.jl            80.00%    1 Missing ⚠️


@ufechner7 ufechner7 requested a review from Copilot September 22, 2025 15:05
Copilot AI left a comment


Pull Request Overview

This PR adds benchmarking capabilities for the symbolic model simplification operation. The changes introduce a new benchmark script and modify the model initialization to support timing measurements during the system simplification phase.

  • Adds a dedicated benchmark script for measuring simplify operation performance
  • Introduces a bench parameter to control timing output during model initialization
  • Modifies system creation to suppress info messages during benchmarking
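The `bench` flag pattern described above can be sketched in Julia. This is an illustrative sketch only: `init_model` and `simplify_step` are hypothetical names, and the workload is a placeholder for the real `structural_simplify` call in the PR.

```julia
# Illustrative sketch of a `bench` keyword that gates timing output
# around an expensive step. `simplify_step` stands in for the actual
# symbolic simplification; it is NOT the code from this PR.
function init_model(; bench::Bool=false)
    simplify_step() = sum(sqrt.(1:1_000_000))   # placeholder workload
    t_simplify = @elapsed result = simplify_step()
    bench && println("Time for simplify: $(round(t_simplify; digits=3)) s")
    return result, t_simplify
end
```

With `bench=false` (the default), the timing line is suppressed, so normal model initialization stays quiet while the benchmark script can opt in.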

Reviewed Changes

Copilot reviewed 3 out of 3 changed files in this pull request and generated 3 comments.

File                        Description
test/bench_simplify.jl      New benchmark script that measures system simplification time
src/symbolic_awe_model.jl   Adds bench parameter and timing logic for simplify operation
src/mtk_model.jl            Modifies ODESystem creation to support bench mode


@ufechner7 ufechner7 marked this pull request as ready for review September 22, 2025 15:30
@ufechner7 ufechner7 linked an issue Sep 22, 2025 that may be closed by this pull request
@ufechner7 ufechner7 assigned ufechner7 and unassigned 1-Bart-1 Sep 23, 2025
@ufechner7 ufechner7 requested a review from 1-Bart-1 September 23, 2025 07:01
@1-Bart-1 (Member) left a comment


Why are you benchmarking this outdated version? Why not the SymbolicAWEModels.jl code? And a benchmark already exists there: SymbolicAWEModels.jl/test/bench.jl

@ufechner7 (Member, Author) commented

Well, please review and approve this pull request, or suggest changes I should make. I already created a similar pull request in the other repository, but I do not want to finish that one before this pull request is approved; once it is approved, I will apply the same changes to the other pull request.

Key properties of this benchmark:

  • low dependency on the CPU speed
  • test passes if 80% of the nominal performance is achieved
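One way to achieve both properties is to calibrate the current CPU with a small fixed workload, scale the nominal time accordingly, and pass if at least 80% of the nominal performance is reached. The sketch below uses assumed reference-machine constants and hypothetical names; it is not the actual code in bench_simplify.jl.

```julia
# Sketch of a CPU-speed-tolerant pass criterion. The constants are
# assumed reference-machine values, not those used in bench_simplify.jl.
const NOMINAL_TIME  = 2.0    # simplify time on the reference machine [s]
const NOMINAL_CALIB = 0.05   # calibration time on the reference machine [s]

# Small fixed workload used to estimate the relative speed of this CPU
calibrate() = @elapsed sum(sqrt.(1:2_000_000))

function bench_passes(measured::Float64)
    speed_factor = calibrate() / NOMINAL_CALIB  # > 1 on a slower machine
    # Pass if at least 80% of the speed-scaled nominal performance is reached
    return measured <= NOMINAL_TIME * speed_factor / 0.8
end
```

Dividing the measured calibration time by the reference value makes the threshold scale with the machine, which is what keeps the test's outcome largely independent of absolute CPU speed.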

@ufechner7 ufechner7 requested a review from 1-Bart-1 September 28, 2025 15:42
@ufechner7 ufechner7 merged commit ff44833 into main Sep 28, 2025
9 checks passed

Development

Successfully merging this pull request may close these issues.

Create a benchmark for simplify

4 participants